Moderate: Red Hat Ceph Storage 5.3 Security update

Related Vulnerabilities: CVE-2023-43040, CVE-2023-46159

Synopsis

Moderate: Red Hat Ceph Storage 5.3 Security update

Type/Severity

Security Advisory: Moderate

Topic

An update is now available for Red Hat Ceph Storage 5.3 in the Red Hat
Ecosystem Catalog.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

Description

Red Hat Ceph Storage is a scalable, open, software-defined storage platform
that combines the most stable version of the Ceph storage system with a
Ceph management platform, deployment utilities, and support services.

These updated packages include numerous bug fixes. Space precludes
documenting all of these changes in this advisory. Users are directed to
the Red Hat Ceph Storage Release Notes for information on the most
significant of these changes:

https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/5.3/html/release_notes/index

All users of Red Hat Ceph Storage are advised to update to these packages,
which provide these bug fixes and security fixes.

Security Fix(es):

  • rgw: improperly verified POST keys (CVE-2023-43040)
  • ceph: RGW crash upon misconfigured CORS rule (CVE-2023-46159)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
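
As background for CVE-2023-46159, the following minimal sketch shows how a CORS rule is applied to an RGW bucket through the S3 API using boto3. The endpoint, credentials, and bucket name are placeholders; the snippet only illustrates the CORS configuration path that the fixed RGW code validates and does not reproduce the flaw.

    import boto3

    # Illustrative values only: endpoint, credentials, and bucket name are placeholders.
    s3 = boto3.client(
        "s3",
        endpoint_url="http://rgw.example.com:8080",
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    # Apply a CORS rule to a bucket through the S3 API; RGW stores and later
    # evaluates these rules, which is the code path hardened by this update.
    s3.put_bucket_cors(
        Bucket="example-bucket",
        CORSConfiguration={
            "CORSRules": [
                {
                    "AllowedOrigins": ["https://app.example.com"],
                    "AllowedMethods": ["GET", "PUT"],
                    "AllowedHeaders": ["*"],
                    "MaxAgeSeconds": 3000,
                }
            ]
        },
    )

    # Read back the stored CORS configuration.
    print(s3.get_bucket_cors(Bucket="example-bucket"))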

Solution

Before applying this update, make sure all previously released errata relevant to your system have been applied.

For details on how to apply this update, refer to:

https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/5/html-single/upgrade_guide/index#upgrade-a-red-hat-ceph-storage-cluster-using-cephadm
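
For reference, the sketch below wraps the cephadm-driven upgrade flow described in the guide above in a short Python helper around the ceph orch upgrade commands. The container image reference is illustrative only; the upgrade guide remains the authoritative procedure and lists the correct image for each release.

    import subprocess

    # Placeholder image reference; substitute the image documented in the
    # upgrade guide for your Red Hat Ceph Storage release.
    IMAGE = "registry.redhat.io/rhceph/rhceph-5-rhel8:latest"

    def run(cmd):
        """Run a ceph CLI command and return its standard output."""
        return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    # Start the orchestrated, rolling upgrade of the cluster daemons.
    print(run(["ceph", "orch", "upgrade", "start", "--image", IMAGE]))

    # Check the upgrade status; repeat until cephadm reports the upgrade is complete.
    print(run(["ceph", "orch", "upgrade", "status"]))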

For supported configurations, refer to:

https://access.redhat.com/articles/1548993

Affected Products

  • Red Hat Enterprise Linux for x86_64 9 x86_64
  • Red Hat Enterprise Linux for x86_64 8 x86_64
  • Red Hat Ceph Storage (OSD) 5 for RHEL 8 x86_64
  • Red Hat Ceph Storage (MON) 5 for RHEL 8 x86_64

Fixes

  • BZ - 2153448 - [cee/sd][cephadm-adopt.yml] fail in TASK [manage nodes with cephadm - ipv6] Error EINVAL: Cannot resolve ip for host [<ipv6>]: [Errno -2] Name or service not known
  • BZ - 2193419 - bindmount tcmu.conf from host into the container
  • BZ - 2211758 - [RHCS 5.3z] multiple scrub and deep-scrub start messages repeating for the same PG
  • BZ - 2215374 - CVE-2023-46159 ceph: RGW crash upon misconfigured CORS rule
  • BZ - 2215380 - CVE-2023-46159 ceph: RGW crash upon misconfigured CORS rule [ceph-5-default]
  • BZ - 2216855 - CVE-2023-43040 rgw: improperly verified POST keys
  • BZ - 2216857 - librgw2: rgw: improperly verified POST keys [ceph-5-default]
  • BZ - 2224636 - [rgw][rfe]: Object reindex tool should recover the index for 'versioned' buckets. (5.3z5)
  • BZ - 2227806 - snap-schedule: allow retention spec to specify max number of snaps to retain
  • BZ - 2227810 - mgr/snap_schedule: catch all exceptions to avoid crashing module
  • BZ - 2227997 - client: issue a cap release immediately if no cap exists
  • BZ - 2228001 - mds: do not send split_realms for CEPH_SNAP_OP_UPDATE msg
  • BZ - 2228039 - mds: do not evict clients if OSDs are laggy
  • BZ - 2231469 - [GSS] Ceph health reports stray daemon(s) and stray host(s) after running the "cephadm-adopt.yml" playbook
  • BZ - 2232164 - unicode decode errors break librados object iteration
  • BZ - 2233444 - [cee/sd][ceph-ansible] Cephadm-preflight playbook stops all the ceph services on the node if older ceph rpms are present on the host.
  • BZ - 2233886 - [backport for 5.3.z] (mds.1): 3 slow requests are blocked
  • BZ - 2234610 - pybind/mgr/volumes: investigate moving calls which may block on libcephfs into another thread
  • BZ - 2236190 - [GSS][backport for 5.3.z] CephFS blocked requests with warning 1 MDSs behind on trimming
  • BZ - 2237391 - [RHCS5.3 backport][cee/sd][cephfs][dashboard]While evicting one client via ceph dashboard, it evicts all other client mounts of the ceph filesystem
  • BZ - 2237880 - [5.3.z6 backport][cee/sd][BlueFS][RHCS 5.x] no BlueFS spillover health warning in RHCS 5.x
  • BZ - 2238665 - mds: blocklist clients with "bloated" session metadata
  • BZ - 2239149 - [CEE/sd][RGW] RGWSI_Notify::robust_notify(const DoutPrefixProvider*, RGWSI_RADOS::Obj&, const RGWCacheNotifyInfo&, optional_yield):402 Notify failed on object: (110) Connection timed out
  • BZ - 2239433 - [RHCS 5.3][Slower bucket listing in RHCS 5.3z1 after resharding]
  • BZ - 2239455 - [RHCS-5.X backport] [RFE] BLK/Kernel: Improve protection against running one OSD twice
  • BZ - 2240089 - [RGW][RFE]include versioning details of the bucket in radosgw-admin bucket stats command [5.3]
  • BZ - 2240144 - libcephsqlite may corrupt data from short reads
  • BZ - 2240586 - pybind/mgr/volumes: pending_subvolume_deletions count is always zero in fs volume info output
  • BZ - 2240727 - [cephfs] subvolume group delete not allowed.
  • BZ - 2240839 - [5.3 backport][RADOS] "currently delayed" slow ops does not provide details on why op has been delayed
  • BZ - 2240977 - [RHCS5.3][RGW log size quickly increasing since upgrading to RHEL 9]
  • BZ - 2244868 - [RHCS 5.3.x clone]: MDS: "1 MDSs behind on trimming" and "2 clients failing to respond to cache pressure".
  • BZ - 2245335 - [rgw][indexless]: on Indexless placement, rgw daemon crashes with " ceph_assert(index.type == BucketIndexType::Normal)" (5.3)
  • BZ - 2245699 - radosgw-admin crashes when using --placement-id
  • BZ - 2247232 - [cee/sd][ceph-mon] ceph-mon does not handle gracefully wrong syntax of "ceph health mute" command
  • BZ - 2248825 - [cee/sd][cephfs] mds pods are crashing with ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
  • BZ - 2249014 - It seems osd_memory_autotune does not work or at least config DB host:target is not honored.
  • BZ - 2249017 - [CEE/sd][cephadm-ansible] Unable to get global configuration values via cephadm-ansible module ceph_config on RHCS 5.3z3
  • BZ - 2249565 - MDS slow requests for the internal 'rename' requests
  • BZ - 2249571 - client: queue a delayed cap flush if there are dirty caps/snapcaps
  • BZ - 2251768 - [GSS] [5.3.z backport] We can not get attributes (getattr) for a specific inode which makes production workload hang.
  • BZ - 2252781 - Resurrect "rados cppool" requiring --yes-i-really-mean-it for pools with self-managed snapshots
  • BZ - 2253672 - [RHCS 5.3] [GSS] ceph_abort_msg("past_interval start interval mismatch")
  • BZ - 2255035 - [RHCS 5.2][The command `ceph mds metadata` doesn't list information for the active MDS server]
  • BZ - 2255436 - [RHCS 5] RFE: change default value of "mds_bal_interval" to "0", aka false
  • BZ - 2256172 - [5.3z6][rgw-archive]: On archive zone, bucket versioning shows disabled on 5.3z6 even though it is enabled
  • BZ - 2257421 - [5.3 backport] 1 MDSs report oversized cache keeps reappearing
  • BZ - 2259297 - mds does not update perfcounters during replay